Deep Residual Learning for Nonlinear Regression
Authors
Abstract
Similar resources
Nonlinear random matrix theory for deep learning
Neural network configurations with random weights play an important role in the analysis of deep learning. They define the initial loss landscape and are closely related to kernel and random feature methods. Despite the fact that these networks are built out of random matrices, the vast and powerful machinery of random matrix theory has so far found limited success in studying them. A main obst...
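The random-feature setting this abstract refers to is easy to make concrete. Below is a minimal sketch, assuming a single hidden layer with i.i.d. Gaussian weights and a tanh nonlinearity (the dimensions and activation are illustrative choices, not taken from the paper): it builds the Gram matrix of the post-activation features, whose eigenvalue spectrum is the object such random-matrix analyses study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (assumptions, not from the paper):
n, d, m = 200, 50, 1000          # samples, input dim, hidden width
X = rng.standard_normal((n, d)) / np.sqrt(d)   # normalized random inputs
W = rng.standard_normal((m, d)) / np.sqrt(d)   # random first-layer weights

# Post-activation random features phi(Wx), here with phi = tanh.
F = np.tanh(X @ W.T)             # shape (n, m)

# Empirical Gram matrix of the random features; random matrix theory
# characterizes the limiting spectrum of M as n, d, m grow together.
M = (F @ F.T) / m
eigvals = np.linalg.eigvalsh(M)
print(eigvals[:5], eigvals[-5:])
```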
Nonparametric Regression for Learning Nonlinear Transformations
Information processing in animals and artificial movement systems consists of a series of transformations that map sensory signals to intermediate representations, and finally to motor commands. Given the physical and neuroanatomical differences between individuals and the need for plasticity during development, it is highly likely that such transformations are learned rather than pre-programme...
Learning Local Error Bars for Nonlinear Regression
We present a new method for obtaining local error bars for nonlinear regression, i.e., estimates of the confidence in predicted values that depend on the input. We approach this problem by applying a maximumlikelihood framework to an assumed distribution of errors. We demonstrate our method first on computer-generated data with locally varying, normally distributed target noise. We then apply i...
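The maximum-likelihood idea described here can be sketched in a few lines. Below is a minimal, hypothetical illustration assuming Gaussian errors with input-dependent variance: one head of a small network predicts the mean and another the log-variance, and minimizing the Gaussian negative log-likelihood yields error bars that depend on the input. The architecture and training setup are assumptions for illustration, not the paper's exact method.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data with input-dependent ("local") noise: y = sin(x) + eps(x).
x = torch.linspace(-3, 3, 512).unsqueeze(1)
y = torch.sin(x) + (0.1 + 0.3 * x.abs()) * torch.randn_like(x)

class HeteroscedasticNet(nn.Module):
    """One head predicts the mean, the other the log-variance,
    so the predicted error bar can vary with the input."""
    def __init__(self, hidden=64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(1, hidden), nn.Tanh())
        self.mean = nn.Linear(hidden, 1)
        self.log_var = nn.Linear(hidden, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mean(h), self.log_var(h)

model = HeteroscedasticNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)

for step in range(2000):
    mu, log_var = model(x)
    # Gaussian negative log-likelihood with input-dependent variance
    # (constant terms dropped).
    loss = (0.5 * (log_var + (y - mu) ** 2 / log_var.exp())).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Local error bar = predicted standard deviation at each input.
with torch.no_grad():
    mu, log_var = model(x)
    sigma = (0.5 * log_var).exp()
```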
Deep Residual Learning and PDEs on Manifold
In this paper, we formulate the deep residual network (ResNet) as a control problem of transport equation. In ResNet, the transport equation is solved along the characteristics. Based on this observation, deep neural network is closely related to the control problem of PDEs on manifold. We propose several models based on transport equation, Hamilton-Jacobi equation and Fokker-Planck equation. T...
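The observation that a ResNet solves the transport equation along its characteristics amounts to reading each residual block as one forward-Euler step of dx/dt = f(x). A minimal sketch of that viewpoint follows; the layer sizes, step size h, and nonlinearity are illustrative assumptions rather than the models proposed in the paper.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """One residual step x <- x + h * f(x): a forward-Euler step along
    the characteristics dx/dt = f(x) of the transport equation."""
    def __init__(self, dim, h=0.1):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim)
        )
        self.h = h

    def forward(self, x):
        return x + self.h * self.f(x)

# Stacking n_steps blocks integrates the ODE from t = 0 to t = 1.
dim, n_steps = 16, 10
net = nn.Sequential(*[ResidualBlock(dim, h=1.0 / n_steps)
                      for _ in range(n_steps)])

x0 = torch.randn(4, dim)   # initial condition (input features)
x1 = net(x0)               # state after "time" t = 1
```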
Journal
Journal title: Entropy
Year: 2020
ISSN: 1099-4300
DOI: 10.3390/e22020193